Collaborative Web Crawler over High-speed Research Network

Authors

  • Shisanu Tongchim
  • Canasai Kruengkrai
  • Virach Sornlertlamvanich
  • Hitoshi Isahara
Abstract

This paper proposes an approach to constructing a distributed web crawler that utilizes existing high-speed research networks. It is an initial effort of the Web Language Engineering (WLE) project, which investigates techniques for processing the languages found in published web documents. In this paper, we focus on designing a geographically distributed web crawler in which multiple crawlers work collaboratively and communicate over an existing research network. Two methods for distributing the workload of web crawling are investigated. The first divides the workload among the crawlers by selecting the nearest crawler to retrieve each domain, while the second distributes the crawling task using a hash function. As a prototype, we implement two crawlers, one located in Thailand and the other in Japan, and report some statistics about the prototype. Finally, we discuss the use of collaborative crawling for real-time monitoring of language usage in web documents.
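To illustrate the two workload-partitioning policies mentioned in the abstract, the following is a minimal Python sketch, not taken from the paper: the crawler names, the round-trip-time table, and the choice of MD5 for hashing are illustrative assumptions only, standing in for whatever measurements and hash function the authors actually used.

import hashlib

# Hypothetical crawler identifiers; the prototype described in the paper uses
# two sites (Thailand and Japan), so two crawlers are modeled here.
CRAWLERS = ["crawler-th", "crawler-jp"]

# Method 1: nearest-crawler assignment.
# Assumes a measured round-trip-time table (milliseconds) from each crawler
# to the target domain; these values are made up for illustration.
RTT_MS = {
    "example.ac.th": {"crawler-th": 12.0, "crawler-jp": 85.0},
    "example.ac.jp": {"crawler-th": 90.0, "crawler-jp": 10.0},
}

def assign_by_hash(domain: str) -> str:
    """Method 2: map a domain to a crawler by hashing its name."""
    digest = hashlib.md5(domain.encode("utf-8")).digest()
    index = int.from_bytes(digest[:4], "big") % len(CRAWLERS)
    return CRAWLERS[index]

def assign_by_proximity(domain: str) -> str:
    """Method 1: pick the crawler with the lowest measured RTT to the domain."""
    rtts = RTT_MS.get(domain)
    if rtts is None:
        # Fall back to hashing when no measurement is available.
        return assign_by_hash(domain)
    return min(rtts, key=rtts.get)

if __name__ == "__main__":
    for d in ["example.ac.th", "example.ac.jp", "example.org"]:
        print(d, "->", assign_by_proximity(d), "/", assign_by_hash(d))

The trade-off sketched here follows the abstract: hash-based assignment needs no coordination or network measurement but ignores locality, while nearest-crawler assignment keeps each domain close to the crawler that can reach it fastest at the cost of maintaining distance estimates.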


Related articles

Distributed High-Performance Web Crawler Based on Peer-to-Peer Network

Distributing the crawling activity among multiple machines spreads the processing load of downloading and analyzing web pages. This paper presents the design of a distributed web crawler based on a peer-to-peer network. The distributed crawler harnesses the excess bandwidth and computing resources of the nodes in the system to crawl the web. Each crawler is deployed on a computing node of the P2P network to analyze web pa...


Analysis of Statistical Hypothesis based Learning Mechanism for Faster Crawling

The growth of the World Wide Web (WWW) has taken it from an intangible quantity of web pages to a gigantic hub of web information, which gradually increases the complexity of the crawling process in a search engine. A search engine handles many queries from all parts of the world, and its answers depend solely on the knowledge it gathers by means of crawling. The information s...


Design and Implementation of a High-Performance Distributed Web Crawler

Broad web search engines as well as many more specialized search tools rely on web crawlers to acquire large collections of pages for indexing and analysis. Such a web crawler may interact with millions of hosts over a period of weeks or months, and thus issues of robustness, flexibility, and manageability are of major importance. In addition, I/O performance, network resources, and OS limits m...


IglooG: A Distributed Web Crawler Based on Grid Service

A web crawler is a program used to download documents from web sites. This paper presents the design of a distributed web crawler on a grid platform. This distributed web crawler is based on our previous work, Igloo. Each crawler is deployed as a grid service to improve the scalability of the system. Information services in our system are in charge of distributing URLs to balance the loads of the cra...


Faster and Efficient Web Crawling with Parallel Migrating Web Crawler

A web crawler is a module of a search engine that fetches data from various servers. Web crawlers are an essential component of search engines, and running a web crawler is a challenging task. Gathering data from sources around the world is time-consuming, and a single crawling process is limited by the processing power of one machine and one network connection. This module de...




Journal:

Volume   Issue

Pages  -

Publication date: 2006